An Unbiased Risk Estimator for Learning with Augmented Classes

Neural Information Processing Systems

This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase. Previous studies generally attempt to discover augmented classes by exploiting geometric properties, achieving inspiring empirical performance yet lacking theoretical understanding, particularly of generalization ability. In this paper we show that, by using unlabeled training data to approximate the potential distribution of augmented classes, an unbiased risk estimator of the testing distribution can be established for the LAC problem under mild assumptions, which paves the way to developing a sound approach with theoretical guarantees. Moreover, the proposed approach can adapt to complex changing environments where augmented classes may appear and the prior of known classes may change simultaneously. Extensive experiments confirm the effectiveness of the proposed approach.
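The construction hinted at in the abstract can be sketched in a few lines: the known-class risk is evaluated on labeled data, while the augmented-class risk is recovered by subtracting the prior-weighted known-class contribution from the loss measured on unlabeled (test-distribution) data. The function name, argument layout, and the assumption that the known-class priors are given are illustrative, not the paper's actual implementation:

```python
def lac_unbiased_risk(priors, loss_known, loss_ac_on_unlabeled, loss_ac_on_labeled):
    """Hedged sketch of a LAC-style unbiased risk estimator.

    priors[k]             : test-time prior of known class k (assumed given)
    loss_known[k]         : mean loss for predicting class k on labeled class-k data
    loss_ac_on_unlabeled  : mean loss for predicting "augmented" on unlabeled data
    loss_ac_on_labeled[k] : mean loss for predicting "augmented" on labeled class-k data
    """
    known_term = sum(p * l for p, l in zip(priors, loss_known))
    # Unlabeled data follow the test distribution, so subtracting the
    # prior-weighted known-class part leaves the augmented-class contribution:
    ac_term = loss_ac_on_unlabeled - sum(p * l for p, l in zip(priors, loss_ac_on_labeled))
    return known_term + ac_term
```

Because the augmented-class term is a difference of empirical averages, it can become negative on finite samples, which is exactly the issue the non-negative correction discussed in the reviews addresses.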



Backtranslation and paraphrasing in the LLM era? Comparing data augmentation methods for emotion classification

Radliński, Łukasz, Guściora, Mateusz, Kocoń, Jan

arXiv.org Artificial Intelligence

Numerous domain-specific machine learning tasks struggle with data scarcity and class imbalance. This paper systematically explores data augmentation methods for NLP, particularly through large language models like GPT. The purpose of this paper is to examine and evaluate whether traditional methods such as paraphrasing and backtranslation can leverage a new generation of models to achieve performance comparable to purely generative methods. We selected methods aimed at solving the problem of data scarcity that utilize ChatGPT, along with an exemplary dataset. We conducted a series of experiments comparing four different approaches to data augmentation in multiple experimental setups. We then evaluated the results both in terms of the quality of the generated data and its impact on classification performance. The key findings indicate that backtranslation and paraphrasing can yield results comparable to, or even better than, zero-shot and few-shot generation of examples.


On the Learning with Augmented Class via Forests

Xu, Fan, Chen, Wuyang, Gao, Wei

arXiv.org Artificial Intelligence

Decision trees and forests have achieved success in various real applications, mostly under the assumption that all testing classes are known in the training data. In this work, we focus on learning with augmented class via forests, where an augmented class may appear in testing data yet not in training data. We incorporate information about the augmented class into tree splitting: a new splitting criterion, the augmented Gini impurity, is introduced to exploit unlabeled data from the testing distribution. We then develop the Learning with Augmented Class via Forests (LACForest for short) approach, which constructs shallow forests according to the augmented Gini impurity and then splits the forests further using pseudo-labeled augmented instances for better performance. We also develop deep neural forests via an optimization objective based on our augmented Gini impurity, which essentially utilizes the representation power of neural networks for forests. Theoretically, we present a convergence analysis for our augmented Gini impurity, and we finally conduct experiments to evaluate our approaches.
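The augmented Gini impurity itself is not spelled out in the abstract. As a hedged illustration of the general idea, one can fold an estimated augmented-class proportion (derived from unlabeled test-distribution data) into the ordinary Gini formula as an extra class; the helper names and the way `theta_ac` enters are assumptions, not the paper's definition:

```python
def gini(proportions):
    # Standard Gini impurity: 1 - sum_k p_k^2
    return 1.0 - sum(p * p for p in proportions)

def augmented_gini(label_counts, theta_ac):
    """Schematic 'augmented' impurity: fold an estimated augmented-class
    mass theta_ac (obtained from unlabeled test-distribution data) into
    the usual Gini computation as an extra class. The paper's actual
    criterion, and its estimate of theta_ac, may differ."""
    n = sum(label_counts)
    # Rescale known-class proportions so all masses sum to one:
    known = [(1.0 - theta_ac) * c / n for c in label_counts]
    return gini(known + [theta_ac])
```

With `theta_ac = 0`, this reduces to the standard Gini impurity, which is the sanity check any such criterion should pass.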


Review for NeurIPS paper: An Unbiased Risk Estimator for Learning with Augmented Classes

Neural Information Processing Systems

How should the non-negative risk estimator be used in this problem? In particular, where should the max-operator be added in the risk estimator? I think it is important to clarify this part. I am aware that the choice of loss is identical to Kiryo et al. My question is: have you tried different loss functions? For the analysis of infinite-sample consistency in Theorem 1, the choice of loss function is quite restrictive and does not cover many losses, such as the sigmoid loss.
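The reviewer's question refers to the non-negative correction of Kiryo et al. (nnPU), where a risk term that can go negative empirically is clipped at zero. Applied to the augmented-class term of a LAC-style estimator, one plausible placement of the max-operator looks as follows; since the placement is exactly what the reviewer asks about, this is a sketch of one choice, not the authors' answer:

```python
def lac_nn_risk(known_term, loss_ac_on_unlabeled, weighted_loss_ac_on_labeled):
    """Non-negative variant in the style of Kiryo et al. (nnPU):
    the augmented-class part of the risk, which can become negative
    on finite samples, is clipped at zero before being added."""
    ac_term = loss_ac_on_unlabeled - weighted_loss_ac_on_labeled
    return known_term + max(0.0, ac_term)
```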


Review for NeurIPS paper: An Unbiased Risk Estimator for Learning with Augmented Classes

Neural Information Processing Systems

Despite a disagreement from R4, there is a consensus among most of the knowledgeable reviewers that this is a good paper. After reading the paper, I also concur that the problem considered in this paper is important and that the proposed solution is interesting, novel, and simple. Hence, I recommend that the paper be accepted as a poster. This paper considers the problem of learning with augmented class and unlabeled samples (a.k.a. open set recognition). That is, the authors assume that, at test time, a new class that is not available at training time can emerge.



An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes

Hu, Jiayu, Shu, Senlin, Li, Beibei, Xiang, Tao, He, Zhongshi

arXiv.org Machine Learning

Partial Label Learning (PLL) is a typical weakly supervised learning task, which assumes each training instance is annotated with a set of candidate labels containing the ground-truth label. Recent PLL methods adopt identification-based disambiguation to alleviate the influence of false positive labels and achieve promising performance. However, they require all classes in the test set to have appeared in the training set, ignoring the fact that new classes keep emerging in real applications. To address this issue, in this paper we focus on the problem of Partial Label Learning with Augmented Class (PLLAC), where one or more augmented classes are not visible in the training stage but appear in the inference stage. Specifically, we propose an unbiased risk estimator with theoretical guarantees for PLLAC, which estimates the distribution of augmented classes by differentiating the distribution of known classes from that of unlabeled data, and which can be equipped with arbitrary PLL loss functions. Besides, we provide a theoretical analysis of the estimation error bound of the estimator, which guarantees the convergence of the empirical risk minimizer to the true risk minimizer as the number of training examples tends to infinity. Furthermore, we add a risk-penalty regularization term to the optimization objective to alleviate the over-fitting issue caused by negative empirical risk. Extensive experiments on benchmark, UCI, and real-world datasets demonstrate the effectiveness of the proposed approach.
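The risk-penalty regularization mentioned in the abstract can be sketched as a term that activates only when the empirical augmented-class risk goes negative, which is what drives the over-fitting the authors describe. The weight `lam` and the exact form are illustrative assumptions, not the paper's formulation:

```python
def pllac_objective(known_term, ac_term, lam=1.0):
    """Schematic PLLAC-style objective: the unbiased risk plus a penalty
    on negative augmented-class empirical risk. lam is a hypothetical
    trade-off weight, not a value from the paper."""
    # The penalty is zero whenever the augmented-class risk is non-negative:
    penalty = lam * max(0.0, -ac_term)
    return known_term + ac_term + penalty
```

Unlike the hard max-operator clipping of nnPU, a penalty of this kind keeps the objective unbiased when the empirical risk is non-negative and only discourages, rather than forbids, negative values.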


A Generalized Unbiased Risk Estimator for Learning with Augmented Classes

Shu, Senlin, He, Shuo, Wang, Haobo, Wei, Hongxin, Xiang, Tao, Feng, Lei

arXiv.org Artificial Intelligence

Machine learning approaches have achieved great performance on a variety of tasks, and most of them focus on a stationary learning environment. However, the learning environment in many real-world scenarios can be open and change gradually, which requires learning approaches to be able to handle distribution change in a non-stationary environment [1-4]. This paper considers a specific problem in which the class distribution changes from the training phase to the test phase, called learning with augmented classes (LAC). In LAC, some augmented classes unobserved in the training phase might emerge in the test phase. In order to make accurate and reliable predictions, the learning model is required to distinguish augmented classes while keeping good generalization performance over the test distribution. The major difficulty in LAC is how to exploit the relationships between known and augmented classes. To overcome this difficulty, various learning methods have been proposed. For example, anomaly or novelty detection methods (e.g., iForest [5], one-class SVM [6, 7], and kernel density estimation [8, 9]) learn a compact geometric description of the known classes and flag as augmented those instances that lie far from that description. The performance of LAC can also be empirically improved by exploiting unlabeled data under the low-density separation assumption to adjust the classification decision boundary [10].